combat bias
ChatGPT will now combat bias with new measures put forth by OpenAI
Fox News correspondent William La Jeunesse joins 'Fox News Sunday' to discuss the evolution of A.I. and the push lawmakers are making to regulate it.

OpenAI has announced a set of new measures to combat bias within its suite of products, including ChatGPT. The artificial intelligence (AI) company recently unveiled an updated Model Spec, a document that defines how OpenAI wants its models to behave in ChatGPT and the OpenAI API. The company says this iteration of the Model Spec builds on the foundational version released last May. "I think with a tool as powerful as this, one where people can access all sorts of different information, if you really believe we're moving to artificial general intelligence (AGI) one day, you have to be willing to share how you're steering the model," Laurentia Romaniuk, who works on model behavior at OpenAI, told Fox News Digital.
How IBM Is Using Artificial Intelligence to Combat Bias in Advertising
IBM is trying to figure out how to use artificial intelligence to identify bias in advertising and mitigate it. Creating a more equitable ad industry is better for both consumers and companies, said Sheri Bachstein, CEO of The Weather Company and general manager of IBM Watson Advertising. She told Cheddar the initiative's research has been able to identify bias in advertising; now it is working on how to avoid misuse.
Microsoft tweaks facial-recognition tech to combat bias
Microsoft's facial-recognition technology is getting smarter at recognizing people with darker skin tones. On Tuesday, the company touted the progress, though it comes amid growing worries that these technologies will enable surveillance against people of color. Microsoft's announcement didn't broach the concerns; the company merely addressed how its facial-recognition tech could misidentify both men and women with darker skin tones. Microsoft has recently reduced the system's error rates by up to 20 times. In February, research from MIT and Stanford University highlighted how facial-recognition technologies can be built with bias. The study found that Microsoft's own system was 99 percent accurate when it came to identifying the gender of lighter-skinned people, but only 87 percent accurate for darker-skinned subjects.
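The figures reported from the MIT and Stanford study boil down to a per-group accuracy comparison. As a rough, hypothetical sketch (not the study's or Microsoft's actual evaluation code), the snippet below shows how per-group accuracy and the gap between groups might be computed; the record format, group labels, and sample counts are illustrative only, chosen to mirror the 99 percent and 87 percent figures cited above.

```python
# Illustrative only: per-group accuracy and the resulting accuracy gap.
# The data below is fabricated to echo the article's reported percentages.
from collections import defaultdict

def accuracy_by_group(records):
    """records: iterable of (group, true_label, predicted_label) tuples."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for group, truth, pred in records:
        total[group] += 1
        if truth == pred:
            correct[group] += 1
    return {g: correct[g] / total[g] for g in total}

# Hypothetical sample: 99/100 correct for lighter-skinned subjects,
# 87/100 correct for darker-skinned subjects.
sample = (
    [("lighter-skinned", "F", "F")] * 99 + [("lighter-skinned", "F", "M")] * 1 +
    [("darker-skinned", "F", "F")] * 87 + [("darker-skinned", "F", "M")] * 13
)

scores = accuracy_by_group(sample)
gap = max(scores.values()) - min(scores.values())
print(scores)                        # {'lighter-skinned': 0.99, 'darker-skinned': 0.87}
print(f"accuracy gap: {gap:.2%}")    # accuracy gap: 12.00%
```

Audits of this kind simply compare such per-group scores; the bias claim in the study rests on the size of that gap rather than on the overall accuracy.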